Handwritten Chinese character recognition based on two dimensional principal component analysis and convolutional neural network
ZHENG Yanbin, HAN Mengyun, FAN Wenxin
Journal of Computer Applications    2020, 40 (8): 2465-2471.   DOI: 10.11772/j.issn.1001-9081.2020010081
With the rapid growth of computing power, the accumulation of training data and the improvement of nonlinear activation functions, Convolutional Neural Network (CNN) achieves good performance in handwritten Chinese character recognition. To address the slow recognition speed of CNN on handwritten Chinese characters, Two Dimensional Principal Component Analysis (2DPCA) was combined with CNN. Firstly, 2DPCA was used to extract the projection eigenvectors of the handwritten Chinese characters. Secondly, the obtained projection eigenvectors were assembled into an eigenmatrix. Thirdly, this eigenmatrix was used as the input of the CNN. Finally, the softmax function was used for classification. Compared with the AlexNet-based model, the proposed method reduces the running time by 78%; compared with the ACNN-based and DCNN-based models, it reduces the running time by 80% and 73% respectively. Experimental results show that the proposed method shortens the running time of handwritten Chinese character recognition without reducing the recognition accuracy.
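As an illustration of the 2DPCA stage described above, the following is a minimal NumPy sketch: it builds the image covariance matrix, keeps the top-d projection eigenvectors, and projects each character image into the eigenmatrix that would then be fed to the CNN. The image size, the value of d, and the random stand-in images are assumptions for illustration, not values from the paper.

```python
# Minimal 2DPCA feature extraction (NumPy only); the projected feature
# matrices are what a CNN classifier would consume downstream.
import numpy as np

def two_dpca(images, d):
    """images: (n, h, w) array; returns the projector X and the features."""
    centered = images - images.mean(axis=0)
    # Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('nhw,nhv->wv', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)       # eigenvalues in ascending order
    X = eigvecs[:, -d:][:, ::-1]               # top-d projection axes, shape (w, d)
    return X, images @ X                       # features: (n, h, d) eigenmatrices

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    imgs = rng.random((100, 32, 32))           # stand-in for 32x32 character images
    X, feats = two_dpca(imgs, d=8)
    print(feats.shape)                         # (100, 32, 8) -> CNN input
```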
Multi-agent collaborative pursuit algorithm based on game theory and Q-learning
ZHENG Yanbin, FAN Wenxin, HAN Mengyun, TAO Xueli
Journal of Computer Applications    2020, 40 (6): 1613-1620.   DOI: 10.11772/j.issn.1001-9081.2019101783
The multi-agent collaborative pursuit problem is a typical problem in multi-agent coordination and collaboration research. Aiming at the pursuit of a single escaper with learning ability, a multi-agent collaborative pursuit algorithm based on game theory and Q-learning was proposed. Firstly, a cooperative pursuit team was established and a game model of cooperative pursuit was built. Secondly, by learning the escaper's strategy choices, the trajectory of the escaper's finite-step (Step-T) cumulative reward was established and incorporated into the pursuers' strategy set. Finally, the Nash equilibrium solution was obtained by solving the cooperative pursuit game, and each agent executed the equilibrium strategy to complete the pursuit task. To handle the case of multiple equilibrium solutions, a virtual action selection algorithm was added to pick the optimal equilibrium strategy. C# simulation experiments show that the proposed algorithm can effectively solve the pursuit of a single escaper with learning ability in an obstacle environment, and comparative analysis of the experimental data shows that, under the same conditions, its pursuit efficiency is better than that of pure game-theoretic or pure learning approaches.
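The equilibrium-selection step can be illustrated with a toy bimatrix game: pure-strategy Nash equilibria are enumerated over two pursuers' payoff tables (filled here with made-up Q-value estimates), and when several equilibria exist the one with the highest joint payoff is kept, standing in for the paper's virtual action selection. All numbers are illustrative assumptions.

```python
# Enumerate pure-strategy Nash equilibria of a two-player matrix game and
# break ties toward the highest joint payoff.
import numpy as np

def pure_nash(p1, p2):
    """Return all (row, col) pure-strategy Nash equilibria of a bimatrix game."""
    eqs = []
    for i in range(p1.shape[0]):
        for j in range(p1.shape[1]):
            if p1[i, j] >= p1[:, j].max() and p2[i, j] >= p2[i, :].max():
                eqs.append((i, j))             # neither player can improve alone
    return eqs

# Illustrative Q-value estimates for 3 candidate actions per pursuer.
q1 = np.array([[4.0, 1.0, 0.5], [2.0, 3.0, 1.0], [0.0, 1.5, 3.5]])
q2 = np.array([[4.0, 0.5, 1.0], [1.0, 3.0, 2.0], [0.5, 1.0, 3.5]])
eqs = pure_nash(q1, q2)                        # several equilibria here
best = max(eqs, key=lambda ij: q1[ij] + q2[ij])
print(eqs, '-> selected:', best)
```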
Multi-Agent path planning algorithm based on ant colony algorithm and game theory
ZHENG Yanbin, WANG Linlin, XI Pengxue, FAN Wenxin, HAN Mengyun
Journal of Computer Applications    2019, 39 (3): 681-687.   DOI: 10.11772/j.issn.1001-9081.2018071601
A two-stage path planning algorithm was proposed for multi-Agent path planning. Firstly, an improved ant colony algorithm was used to plan, for each Agent, an optimal path from its starting point to its target point that avoids the static obstacles in the environment. The reverse learning (opposition-based learning) method was introduced into the improved ant colony algorithm to initialize the ant positions and increase the global search ability. The adaptive inertia weight factor from particle swarm optimization was used to adjust the pheromone intensity Q so that it changes adaptively and avoids falling into local optima, and the pheromone volatilization factor ρ was adjusted to speed up the iteration of the algorithm. Then, when dynamic collisions arose between multiple Agents, game theory was used to construct a dynamic obstacle avoidance model between them, and the virtual action method was used to solve the game and select among multiple Nash equilibria, so that each Agent quickly learned the optimal Nash equilibrium. The simulation results show that the improved ant colony algorithm achieves significant improvements in search accuracy and search speed over the traditional ant colony algorithm, and that, compared with Mylvaganam's multi-Agent dynamic obstacle avoidance algorithm, the proposed algorithm reduces the total path length and improves the convergence speed.
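The two ant-colony modifications the abstract names can be sketched in isolation from a full planner: opposition-based (reverse learning) initialization of the ant positions, and a PSO-style inertia weight that adapts the pheromone intensity Q and the volatilization factor ρ over the iterations. The bounds, schedules, and constants below are illustrative assumptions.

```python
# Reverse-learning initialisation plus inertia-weight-style adaptation of
# the ACO parameters Q and rho; not a full path planner.
import numpy as np

rng = np.random.default_rng(1)

def opposition_init(n_ants, lo, hi, goal=np.array([9.0, 9.0])):
    """Sample positions and their opposites, keep the better half
    (fitness here is a stand-in: closeness to an assumed goal point)."""
    pos = rng.uniform(lo, hi, size=(n_ants, 2))
    both = np.vstack([pos, lo + hi - pos])          # points and opposition points
    fitness = -np.linalg.norm(both - goal, axis=1)  # higher is better
    return both[np.argsort(fitness)[-n_ants:]]

def adaptive_params(t, t_max, w_max=0.9, w_min=0.4, q_max=100.0, rho0=0.5):
    """Inertia weight borrowed from PSO, applied to Q and rho."""
    w = w_max - (w_max - w_min) * t / t_max
    Q = q_max * w                # pheromone intensity shrinks over time
    rho = rho0 * (1 - w) + 0.1   # volatilisation grows, speeding up iteration
    return Q, rho

ants = opposition_init(20, lo=0.0, hi=10.0)
for t in (0, 50, 100):
    print(t, adaptive_params(t, 100))
```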
Obstacle avoidance method for multi-agent formation based on artificial potential field method
ZHENG Yanbin, XI Pengxue, WANG Linlin, FAN Wenxin, HAN Mengyun
Journal of Computer Applications    2018, 38 (12): 3380-3384.   DOI: 10.11772/j.issn.1001-9081.2018051119
Formation obstacle avoidance is one of the key issues in the research of multi-agent formation. Concerning the obstacle avoidance problem of multi-agent formations in dynamic environments, a formation obstacle avoidance method based on the Artificial Potential Field (APF) method and the Cuckoo Search (CS) algorithm was proposed. Firstly, in the heterogeneous mode of the dynamic formation transformation strategy, APF was used to plan obstacle avoidance for each agent in the formation. Then, in view of the difficulty of setting APF's attraction and repulsion increment coefficients, the Lévy flight mechanism of CS was used to randomly search for increment coefficients suited to the environment. Matlab simulation results show that the proposed method can effectively solve the obstacle avoidance problem of multi-agent formations in complex environments; evaluation of the experimental data with an efficiency function verifies the rationality and effectiveness of the method.
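The coefficient search can be pictured as follows: cuckoo-search-style Lévy flights (via Mantegna's approximation) propose new attraction/repulsion increment coefficients, and a candidate is kept when it lowers a cost that stands in for running the formation through the potential field. The cost function and step sizes are illustrative assumptions.

```python
# Levy-flight search over the two APF gain coefficients.
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(2)

def levy_step(beta=1.5, size=2):
    """Mantegna's algorithm for heavy-tailed Levy step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cost(k):
    """Toy surrogate: pretend gains (5, 2) give collision-free formation paths."""
    return float(np.sum((k - np.array([5.0, 2.0])) ** 2))

k = np.array([1.0, 1.0])                   # (attraction, repulsion) coefficients
for _ in range(200):
    cand = np.clip(k + 0.1 * levy_step(), 0.1, 10.0)
    if cost(cand) < cost(k):               # greedy nest replacement, as in CS
        k = cand
print('tuned coefficients:', k)
```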
Multi-Agent path planning algorithm based on hierarchical reinforcement learning and artificial potential field
ZHENG Yanbin, LI Bo, AN Deyu, LI Na
Journal of Computer Applications    2015, 35 (12): 3491-3496.   DOI: 10.11772/j.issn.1001-9081.2015.12.3491
Aiming at the slow convergence and low efficiency of path planning algorithms, a multi-Agent path planning algorithm based on hierarchical reinforcement learning and artificial potential field was proposed. Firstly, the multi-Agent operating environment was regarded as an artificial potential field, and the potential energy of every point, which represents the maximal reward obtainable under the optimal strategy, was determined from prior knowledge. Then, the strategy update process was limited to a smaller local space or a lower-dimensional high-level space, using the environment-model-free learning and partial updates of hierarchical reinforcement learning, to enhance the performance of the learning algorithm. Finally, the proposed algorithm was tested on the taxi problem in a grid environment; to approximate the real environment more closely and increase the portability of the algorithm, it was also verified in a three-dimensional simulation environment. The experimental results show that the algorithm converges quickly and that the convergence procedure is stable.
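One way to picture the potential-field seeding is a small grid world: each cell's potential energy (approximated here by the negative Manhattan distance to the goal, standing in for the paper's prior knowledge) initializes the value table that Q-learning then refines, so early episodes already move downhill toward the goal. The grid size, rewards, and learning constants are illustrative assumptions.

```python
# Q-learning on a grid with a potential-field-initialised Q-table.
import numpy as np

H, W, goal = 5, 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]         # up, down, left, right

# Potential energy of every point: maximal reward under an optimal strategy,
# approximated by -ManhattanDistance(point, goal).
V0 = np.array([[-(abs(r - goal[0]) + abs(c - goal[1])) for c in range(W)]
               for r in range(H)], dtype=float)
Q = np.repeat(V0[:, :, None], len(actions), axis=2)  # potential-seeded Q-table

rng = np.random.default_rng(3)
alpha, gam, eps = 0.5, 0.95, 0.1
for _ in range(300):                                 # standard Q-learning episodes
    s = (0, 0)
    while s != goal:
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
        ns = (min(max(s[0] + actions[a][0], 0), H - 1),
              min(max(s[1] + actions[a][1], 0), W - 1))
        r = 0.0 if ns == goal else -1.0              # step cost until the goal
        Q[s][a] += alpha * (r + gam * Q[ns].max() - Q[s][a])
        s = ns
print(np.argmax(Q, axis=2))                          # greedy action per cell
```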
Multi-Agent urban traffic coordination control research based on game learning
ZHENG Yanbin, WANG Ning, DUAN Lingyu
Journal of Computer Applications    2014, 34 (2): 601-604.  
The coordination problem between Agents at traffic intersections is a game problem. On the basis of bounded rationality, game learning ideas were used to build a multi-Agent coordinated game learning algorithm. The algorithm analyzes travelers' irrational behavior and corrects it to keep urban traffic intersections unblocked, so as to achieve regional and global transportation optimization. Finally, its feasibility is verified by an example and simulation.
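A toy version of such game learning at a single intersection: two boundedly rational Agents repeatedly play a go/yield game and adjust running value estimates of their actions from the payoffs they observe, typically breaking symmetry so that one learns to go and the other to yield. The payoff numbers and learning constants are illustrative assumptions, not the paper's model.

```python
# Repeated go/yield game with two independent boundedly rational learners.
import numpy as np

rng = np.random.default_rng(4)
GO, YIELD = 0, 1
# payoff[(a1, a2)] -> (reward1, reward2): both going collides, both yielding stalls.
payoff = {(GO, GO): (-10, -10), (GO, YIELD): (3, 1),
          (YIELD, GO): (1, 3), (YIELD, YIELD): (0, 0)}

q = np.zeros((2, 2))                   # q[agent, action]: running value estimates
alpha, eps = 0.1, 0.1
for _ in range(5000):
    acts = tuple(int(rng.integers(2)) if rng.random() < eps
                 else int(np.argmax(q[i])) for i in range(2))
    rew = payoff[acts]
    for i in range(2):                 # incremental update toward observed payoff
        q[i, acts[i]] += alpha * (rew[i] - q[i, acts[i]])
print(q)                               # usually one row favours GO, the other YIELD
```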
Improved artificial fish swarm algorithm based on social learning mechanism
ZHENG Yanbin, LIU Jingjing, WANG Ning
Journal of Computer Applications    2013, 33 (05): 1305-1329.   DOI: 10.3724/SP.J.1087.2013.01305
The Artificial Fish Swarm Algorithm (AFSA) suffers from low search speed and difficulty in obtaining accurate values. To solve these problems, an improved algorithm based on a social learning mechanism was proposed. In the later stage of optimization, convergence and divergence behaviors were used to improve the algorithm: the two behaviors provide fast search speed and high optimization accuracy, while the divergence behavior also enhances population diversity and the ability to escape local extrema. The improved algorithm thus enhances search performance to a certain extent. The experimental results show that the proposed algorithm is feasible and effective.
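The two added behaviors can be sketched on a toy objective: convergence pulls each artificial fish toward the best fish found so far, while divergence pushes the swarm away from its own center to restore diversity and help it escape local extrema. The sphere objective, step sizes, and behavior schedule are illustrative assumptions.

```python
# Convergence and divergence behaviours on a 2-D sphere function.
import numpy as np

rng = np.random.default_rng(5)

def fitness(x):
    """Sphere function, minimised at the origin."""
    return np.sum(x ** 2, axis=-1)

fish = rng.uniform(-5.0, 5.0, size=(30, 2))          # the artificial fish swarm
for t in range(200):
    best = fish[np.argmin(fitness(fish))]            # socially shared best fish
    centre = fish.mean(axis=0)
    if t % 5:                                        # convergence behaviour
        fish += 0.3 * (best - fish) + 0.01 * rng.normal(size=fish.shape)
    else:                                            # divergence behaviour
        fish += 0.1 * (fish - centre) + 0.05 * rng.normal(size=fish.shape)
print('best value found:', fitness(fish).min())
```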
Team task allocation method for computer generated actor based on game theory
ZHENG Yanbin, TAO Xueli
Journal of Computer Applications    2013, 33 (03): 793-795.   DOI: 10.3724/SP.J.1087.2013.00793
For complex tasks with time constraints that can be dynamically added to the environment, a task allocation model based on game theory was established and a task allocation method was proposed. The method enables each Computer Generated Actor (CGA) to choose its actions according to its own local information, and ensures that the CGAs quickly learn a strict pure-strategy Nash equilibrium by using the fictitious play method for behavior coordination. The simulation results show that the method is reasonable and can effectively solve the dynamic task allocation problem.
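The fictitious play step can be illustrated with a two-actor, two-task game in which each CGA best-responds to the empirical frequency of the other's past task choices; play settles into a strict pure-strategy Nash equilibrium where the actors split the tasks. The payoff matrices are illustrative assumptions.

```python
# Fictitious play between two task-selecting actors.
import numpy as np

# payoff[i][own_task, other_task]: each actor prefers the task the other skips.
payoff = [np.array([[1.0, 4.0], [3.0, 1.0]]),
          np.array([[1.0, 3.0], [4.0, 1.0]])]
counts = [np.ones(2), np.ones(2)]        # counts[i]: observed choices of actor i
for _ in range(100):
    acts = []
    for i in range(2):
        belief = counts[1 - i] / counts[1 - i].sum()     # empirical mix of other
        acts.append(int(np.argmax(payoff[i] @ belief)))  # best response to belief
    for i in range(2):
        counts[i][acts[i]] += 1
print('converged joint choice:', acts)   # actors end up on different tasks
```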